3 research outputs found
A Call for Standardization and Validation of Text Style Transfer Evaluation
Text Style Transfer (TST) evaluation is, in practice, inconsistent.
Therefore, we conduct a meta-analysis of human and automated TST evaluation and
experimentation that thoroughly examines the existing literature in the field. The
meta-analysis reveals a substantial standardization gap in human and automated
evaluation. In addition, we find a validation gap: only a few automated
metrics have been validated using human experiments. To this end, we thoroughly
scrutinize both the standardization and the validation gap and reveal the resulting
pitfalls. This work also paves the way to closing the standardization and
validation gaps in TST evaluation by calling out requirements to be met by
future research.
Comment: Accepted to Findings of ACL 2023
Text Style Transfer Evaluation Using Large Language Models
Evaluating Text Style Transfer (TST) is a complex task due to its
multifaceted nature. The quality of the generated text is measured based on
challenging factors, such as style transfer accuracy, content preservation, and
overall fluency. While human evaluation is considered to be the gold standard
in TST assessment, it is costly and often hard to reproduce. Therefore,
automated metrics are prevalent in this domain. Nevertheless, it remains
unclear whether these automated metrics correlate with human evaluations.
Recent strides in Large Language Models (LLMs) have showcased their capacity to
match and even exceed average human performance across diverse, unseen tasks.
This suggests that LLMs could be a feasible alternative to human evaluation and
other automated metrics in TST evaluation. We compare the results of different
LLMs in TST using multiple input prompts. Our findings highlight a strong
correlation between LLM judgments (even with zero-shot prompting) and human
evaluation, showing that LLMs often outperform traditional automated metrics. Furthermore, we
introduce the concept of prompt ensembling, demonstrating its ability to
enhance the robustness of TST evaluation. This research contributes to the
ongoing evaluation of LLMs in diverse tasks, offering insights into successful
outcomes and areas of limitation.
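To make the idea of prompt ensembling concrete, here is a minimal sketch of how several prompt phrasings could be averaged into one LLM-based style-transfer rating. The helper query_llm, the prompt wordings, and the 1-to-5 scale are illustrative assumptions, not the exact setup used in the paper.

import re
import statistics

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-LLM API call; returns the model's text reply."""
    raise NotImplementedError("plug in an actual LLM client here")

# Several phrasings of the same style-accuracy question; the ensemble averages over them.
STYLE_PROMPTS = [
    "On a scale of 1 to 5, how well does the rewrite match the target style "
    "'{style}'?\nSource: {src}\nRewrite: {tgt}\nAnswer with a single number.",
    "Rate from 1 (not at all) to 5 (perfectly) whether the rewrite is written "
    "in the style '{style}'.\nRewrite: {tgt}\nReply with the number only.",
    "Does the rewrite '{tgt}' exhibit the style '{style}'? "
    "Give a score from 1 to 5, number only.",
]

def ensemble_style_score(src: str, tgt: str, style: str) -> float:
    """Query the LLM once per prompt phrasing and average the extracted 1-5 ratings."""
    scores = []
    for template in STYLE_PROMPTS:
        reply = query_llm(template.format(src=src, tgt=tgt, style=style))
        match = re.search(r"[1-5]", reply)  # take the first rating digit in the reply
        if match:
            scores.append(int(match.group()))
    return statistics.mean(scores) if scores else float("nan")

Averaging over prompt variants damps the sensitivity of any single phrasing, which is the robustness effect the abstract attributes to prompt ensembling.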
Evaluating Dynamic Topic Models
There is a lack of quantitative measures to evaluate the progression of
topics through time in dynamic topic models (DTMs). Filling this gap, we
propose a novel evaluation measure for DTMs that analyzes the changes in the
quality of each topic over time. Additionally, we propose an extension
combining topic quality with the model's temporal consistency. We demonstrate
the utility of the proposed measure by applying it to synthetic data and data
from existing DTMs. We also conduct a human evaluation, which indicates that
the proposed measure correlates well with human judgment. Our findings may help
in identifying changing topics, evaluating different DTMs, and guiding future
research in this area.
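As an illustration of the kind of measure described above, the following sketch scores one topic of a dynamic topic model by combining per-time-slice quality with the consistency of its top words across adjacent slices. The coherence_at_t placeholder, the Jaccard-based consistency, and the weight alpha are assumptions made for illustration, not the paper's exact formulation.

from statistics import mean
from typing import Callable, List

def jaccard(a: List[str], b: List[str]) -> float:
    """Overlap of two top-word lists, used here as a simple temporal-consistency proxy."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def dtm_topic_score(
    top_words_per_t: List[List[str]],               # top words of one topic, one list per time slice
    coherence_at_t: Callable[[List[str]], float],   # placeholder coherence metric (e.g. NPMI)
    alpha: float = 0.5,                             # assumed weight balancing quality vs. consistency
) -> float:
    """Combine average topic quality over time with consistency between adjacent slices."""
    quality = mean(coherence_at_t(words) for words in top_words_per_t)
    if len(top_words_per_t) > 1:
        consistency = mean(
            jaccard(top_words_per_t[t], top_words_per_t[t + 1])
            for t in range(len(top_words_per_t) - 1)
        )
    else:
        consistency = 1.0
    return alpha * quality + (1 - alpha) * consistency

# Toy usage with a dummy coherence function (purely illustrative):
# score = dtm_topic_score(
#     top_words_per_t=[["economy", "market", "trade"], ["economy", "trade", "tariff"]],
#     coherence_at_t=lambda words: 1.0,
# )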